2.
Sensors (Basel) ; 22(20)2022 Oct 20.
Article in English | MEDLINE | ID: mdl-36298367

ABSTRACT

Background: Digital clinical measures collected via digital sensing technologies such as smartphones, smartwatches, wearables, and ingestible and implantable sensors are increasingly used by individuals and clinicians to capture the health outcomes or behavioral and physiological characteristics of individuals. Time series classification (TSC) is very commonly used for modeling digital clinical measures. While deep learning models for TSC are common and powerful, they face some fundamental challenges. This review presents the non-deep learning models commonly used for time series classification in biomedical applications that can achieve high performance. Objective: We performed a systematic review to characterize the techniques used in time series classification of digital clinical measures throughout all stages of data processing and model building. Methods: We conducted a literature search on PubMed, as well as the Institute of Electrical and Electronics Engineers (IEEE), Web of Science, and SCOPUS databases, using a range of search terms to retrieve peer-reviewed articles reporting on academic research about digital clinical measures from the five-year period between June 2016 and June 2021. We identified and categorized the research studies based on the types of classification algorithms and sensor input types. Results: We found 452 papers in total from the four databases: PubMed, IEEE, Web of Science, and SCOPUS. After removing duplicates and irrelevant papers, 135 articles remained for detailed review and data extraction. Among these, engineered features computed with time series methods and subsequently fed into widely used machine learning classifiers were the most commonly used technique, and also most frequently achieved the best performance metrics (77 out of 135 articles). Statistical modeling algorithms (24 out of 135 articles) were the second most common and also the second-best classification technique. Conclusions: In this review, we summarize and categorize the time series classification models and interpretation methods used in biomedical applications. While high time series classification performance has been achieved on digital clinical, physiological, and biomedical measures, no standard benchmark datasets, modeling methods, or reporting methodology exist. There is no single widely used method for time series model development or feature interpretation; however, many different methods have proven successful.


Subjects
Algorithms, Machine Learning, Humans, Smartphone, Time Factors
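The dominant approach identified above (engineered time-series features fed into a widely used classifier) can be illustrated with a minimal sketch. The feature set, synthetic data, and classifier choice below are invented for illustration and are not taken from any of the reviewed studies:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def extract_features(ts):
    # A few simple engineered features from one time-series window
    return np.array([
        ts.mean(),
        ts.std(),
        ts.min(),
        ts.max(),
        np.abs(np.diff(ts)).mean(),   # mean absolute first difference
        (ts > ts.mean()).mean(),      # fraction of samples above the mean
    ])

# Synthetic "digital clinical measure": class 1 has higher variability
n, length = 200, 100
X_raw = rng.normal(0, 1, size=(n, length))
y = rng.integers(0, 2, size=n)
X_raw[y == 1] *= 2.0

# Feature extraction, then a standard machine learning classifier
X = np.vstack([extract_features(ts) for ts in X_raw])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

In practice, richer feature sets (spectral, autocorrelation, or entropy-based features) and domain-specific windowing would replace these toy features, but the pipeline shape (window, featurize, classify) is the same.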
3.
NPJ Digit Med ; 5(1): 130, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-36050372

ABSTRACT

Mass surveillance testing can help control outbreaks of infectious diseases such as COVID-19. However, diagnostic test shortages are prevalent globally and continue to occur in the US with the onset of new COVID-19 variants and emerging diseases like monkeypox, demonstrating an unprecedented need for improving our current methods for mass surveillance testing. By targeting surveillance testing toward individuals who are most likely to be infected and, thus, increasing the testing positivity rate (i.e., percent positive in the surveillance group), fewer tests are needed to capture the same number of positive cases. Here, we developed an Intelligent Testing Allocation (ITA) method by leveraging data from the CovIdentify study (6765 participants) and the MyPHD study (8580 participants), including smartwatch data from 1265 individuals of whom 126 tested positive for COVID-19. Our rigorous model and parameter search uncovered the optimal time periods and aggregate metrics for monitoring continuous digital biomarkers to increase the positivity rate of COVID-19 diagnostic testing. We found that resting heart rate (RHR) features distinguished between COVID-19-positive and -negative cases earlier in the course of the infection than steps features, as early as 10 and 5 days prior to the diagnostic test, respectively. We also found that including steps features increased the area under the receiver operating characteristic curve (AUC-ROC) by 7-11% when compared with RHR features alone, while including RHR features improved the AUC of the ITA model's precision-recall curve (AUC-PR) by 38-50% when compared with steps features alone. The best AUC-ROC (0.73 ± 0.14 and 0.77 on the cross-validated training set and independent test set, respectively) and AUC-PR (0.55 ± 0.21 and 0.24) were achieved by using data from a single device type (Fitbit) with high-resolution (minute-level) data. 
Finally, we show that ITA generates up to a 6.5-fold increase in the positivity rate in the cross-validated training set and up to a 4.5-fold increase in the positivity rate in the independent test set, including both symptomatic and asymptomatic (up to 27%) individuals. Our findings suggest that, if deployed on a large scale and without needing self-reported symptoms, the ITA method could improve the allocation of diagnostic testing resources and reduce the burden of test shortages.
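The core idea of targeted test allocation (rank individuals by a risk score and spend a fixed test budget on the top of the ranking, raising the positivity rate over random surveillance) can be sketched as follows. The scores, prevalence, and effect size here are synthetic, not the ITA model or CovIdentify/MyPHD data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cohort of 1000 individuals; 5% are truly positive
n = 1000
positive = rng.random(n) < 0.05

# Assume a risk model that (noisily) scores positives higher
scores = rng.normal(0, 1, n) + 1.5 * positive

baseline_rate = positive.mean()  # positivity rate under random surveillance

# Allocate a fixed test budget to the highest-risk individuals
budget = 100
selected = np.argsort(scores)[-budget:]
targeted_rate = positive[selected].mean()

fold_increase = targeted_rate / baseline_rate
```

With a fixed budget, the fold increase is simply the targeted positivity rate divided by the baseline prevalence, which mirrors how fold increases like those reported above are expressed.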

4.
IEEE Trans Biomed Eng ; 64(7): 1631-1637, 2017 07.
Article in English | MEDLINE | ID: mdl-28113229

ABSTRACT

A new thin-filmed perfusion sensor was developed using a heat flux gauge, a thin-film thermocouple, and a heating element. This sensor, termed "CHFT+," is an enhancement of the previously established combined heat flux-temperature (CHFT) sensor technology predominantly used to quantify the severity of burns [1]. The CHFT+ sensor was uniquely designed to measure tissue perfusion on explanted organs destined for transplantation, but could be adapted for use in a wide variety of other biomedical applications. Exploiting the thin and semiflexible nature of the new CHFT+ sensor assembly, perfusion measurements can be made from the underside of the organ, providing a quantitative indirect measure of capillary pressure occlusion. Results from a live tissue test demonstrated, for the first time, the effects of pressure occlusion on an explanted porcine kidney. CHFT+ sensors were placed on top of and underneath 18 kidneys to measure and compare perfusion at perfusate temperatures of 5 and 20 °C. The data collected show greater perfusion on the topside than on the underside of the specimen for the length of the experiment. This indicates that pressure occlusion genuinely affects perfusion and, thus, the overall preservation of explanted organs. Moreover, the results demonstrate the effect of preservation temperature on the tissue vasculature: considering topside perfusion only, perfusion at 20 °C was greater than at 5 °C, likely due to the vasoconstrictive response at the lower perfusate temperature.


Subjects
Heating/instrumentation, Kidney Transplantation, Organ Preservation/adverse effects, Renal Artery Obstruction/etiology, Renal Artery Obstruction/physiopathology, Renal Artery/physiopathology, Thermography/instrumentation, Animals, Capillary Permeability, Equipment Design, Equipment Failure Analysis, In Vitro Techniques, Renal Artery Obstruction/diagnosis, Reproducibility of Results, Sensitivity and Specificity, Swine, Thermal Conductivity